Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects

Neural Information Processing Systems

Articulated object manipulation is a fundamental yet challenging task in robotics. Due to significant geometric and semantic variations across object categories, previous manipulation models struggle to generalize to novel categories. Few-shot learning is a promising solution for alleviating this issue by allowing robots to perform a few interactions with unseen objects. Recognizing this limitation, we observe that despite their distinct shapes, different categories often share similar local geometries essential for manipulation, such as pullable handles and graspable edges, a factor typically underutilized in previous few-shot learning works. To harness this commonality, we introduce 'Where2Explore', an affordance learning framework that effectively explores novel categories with minimal interactions on a limited number of instances.